Anyone can wrap GPT around a Wikipedia page and call it an AI agent. Most do. Agentify.Help is the protocol for building expert personas that the real expert would recognize — grounded in corpus, true to framework, accountable by name.
Every credible expert will eventually have dozens of AI clones. Most will be slop — averaged positions, hallucinated citations, broken voice. The people who need the real framework can't find it. The expert's reputation erodes through a thousand bad proxies they never authorized.
Built from Wikipedia summaries and top-3 Google results. Sounds authoritative. Has no grounding. Cites sources that don't exist. Gets the expert's actual position backwards on half the questions.
Nobody is accountable. The steward is anonymous. The corpus is undisclosed. The framework is averaged from the training data.
Built from the expert's own published work — papers, lectures, interviews, grey literature. Framework extracted at the reasoning level, not the summary level. Citations resolve to real passages.
Someone is accountable. The steward is named. The corpus is versioned. The framework is distilled, not averaged.
A WellAgent is recognizable. Someone who knows the expert's work reads a response and says yes, that's them. Four things make that possible.
Built from published works, not summaries. 1,000+ chunked passages. Every citation points to a real document.
Not "what does GPT say about this expert" but "what does this expert's actual reasoning framework say about this question."
Someone is accountable for quality. A named steward built this, maintains it, and can be held responsible for errors.
The expert's institutional affiliations are visible next to the agent. Who the agent serves is not a mystery.
The pipeline is the same for every expert — a cardiologist, a theologian, a craftsman, a historian. The corpus source changes. The stages don't.
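The fixed stage sequence this document describes (slot claimed, corpus gated, Overture window, live) can be sketched as an ordered enum. The stage names below are illustrative, not an actual WellSpr.ing API:

```python
from enum import Enum

class PipelineStage(Enum):
    # Hypothetical names for the gates this document describes.
    STEWARD_REGISTERED = 1   # slot claimed in the registry
    CORPUS_INTAKE = 2        # artifacts validated, quality gate passed
    OVERTURE_WINDOW = 3      # thirty-day notification window running
    LIVE = 4                 # agent public, listed in the registry

def next_stage(stage: PipelineStage) -> PipelineStage:
    """Advance one gate. The order never changes; only the corpus does."""
    return PipelineStage(stage.value + 1)
```

The point of modeling it as a strict sequence: no stage can be skipped, which is exactly the covenant claim about shipping before the Overture window.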
Check availability at agentify.help/registry.

The richest, most representative agents are not assembled by one person in a weekend. They are built from a broad base of source material gathered by multiple contributors over a structured window of time. This is the recommended pattern for any expert with a large or multi-domain body of work.

The steward sets a deadline and invites contributors — research assistants, citing authors, domain peers, or curators who know specific sub-literatures. Each contributor is assigned a source domain (e.g., cardiology papers, conference talks, clinical guidelines).
Contributors assemble their artifact sets asynchronously — no coordination needed mid-flight. Each bundle lands in WellSpr.ing staging with full provenance: who assembled it, from which sources, at what dates. No AI-generated summaries. No undated material. Real provenance only.
After the deadline, WellSpr.ing runs the aggregation pass: deduplication by content hash, chunk merging by source domain, quality gate on provenance fields. The result is a single versioned corpus with per-contributor attribution in the chain of title.
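The aggregation pass above can be sketched in a few lines. This is a minimal illustration, not the WellSpr.ing implementation; the provenance field names are assumptions:

```python
import hashlib
from collections import defaultdict

# Assumed provenance field names, per "who assembled it, from which sources, at what dates".
REQUIRED_PROVENANCE = {"contributor", "source_url", "retrieved_at"}

def aggregate(bundles: list) -> dict:
    """Merge contributor bundles into one corpus: quality-gate on provenance
    fields, deduplicate by content hash, and group chunks by source domain."""
    seen, corpus = set(), defaultdict(list)
    for bundle in bundles:
        for chunk in bundle["chunks"]:
            if not REQUIRED_PROVENANCE <= chunk.keys():
                continue  # quality gate: drop chunks with missing provenance
            digest = hashlib.sha256(chunk["text"].encode()).hexdigest()
            if digest in seen:
                continue  # deduplication by content hash
            seen.add(digest)
            corpus[bundle["domain"]].append({**chunk, "hash": digest})
    return dict(corpus)
```

Because deduplication keys on a hash of the chunk text, two contributors who independently ingest the same passage produce one corpus entry, with attribution preserved on the surviving chunk.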
Why this matters: a corpus assembled by twenty contributors covering different source domains is systematically harder to argue with than one assembled by one person over a weekend. The multi-contributor pattern is how you build agents that hold up to expert scrutiny — and that the expert themselves finds credible when they review it during the Overture window.
Every WellAgent that has passed stewardship — corpus submitted, Overture sent, thirty-day window closed. One person, one agent. Public record. Updated in real time.
| Subject | Domain | Steward | Agent URL | Status |
|---|---|---|---|---|
| Dr. Allan Sniderman | Cardiovascular Medicine / Lipoprotein Research | Naturologie | cholesteroltruth.com/experts/dr-allan-sniderman | LIVE |
| Guy Kawasaki | Entrepreneurship, Marketing, Venture Capital, Innovation | WellSpr.ing | — | LIVE (PREVIEW) |
Full machine-readable registry: /api/registry.json
Checking the registry and claiming a stewardship slot is open to anyone. Moving through the pipeline has gates — because the registry's credibility depends on every entry in it being real work done by an accountable person.
Anyone can check availability and register as first steward. Your name and contact go into the registry. The slot is yours — no duplicate can be filed afterward.
Corpus intake requires real published artifacts, verifiable source URLs, and publication dates. Every chunk is validated on ingest. No Wikipedia summaries, no undated material, no unpublished drafts. The quality gate runs before the bundle is accepted.
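The per-chunk validation described above can be illustrated as follows. The field names (`source_url`, `published`, `kind`) are assumptions for the sketch, not the actual intake schema:

```python
from datetime import date
from urllib.parse import urlparse

def passes_quality_gate(chunk: dict) -> bool:
    """Sketch of the intake check: a verifiable source URL and a real
    publication date, validated before the bundle is accepted."""
    url = urlparse(chunk.get("source_url", ""))
    if url.scheme not in ("http", "https") or not url.netloc:
        return False                       # no verifiable source URL
    try:
        date.fromisoformat(chunk.get("published", ""))
    except ValueError:
        return False                       # undated material is rejected
    return chunk.get("kind") != "summary"  # no Wikipedia-style summaries
```

Running the gate at ingest, per chunk, is what lets a bundle fail fast instead of contaminating a versioned corpus that has already shipped.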
The agent is not public until the thirty-day Overture window has run and the expert has had a genuine chance to respond. An agent that ships before notification is a covenant violation, not a WellAgent. The timeline is in the registry.
The pipeline is open. You can build an expert agent without WellSpr.ing rails — many will. The difference is what you get and what you give.
The protocol is open. The certification is earned. Over time the market learns to distinguish the ones built with integrity. Rails are how you build the version that serious operators eventually insist on. — WellSpr.ing
"The knowledge is free. If it helped you, Respect Mon."
WellAgents are not behind paywalls. The uninsured patient, the rural clinician, the student writing their thesis — they get the consultation free. Those who found it genuinely valuable contribute what it was worth to them afterward. That contribution flows transparently to the steward and ultimately to the expert's named program.
Extractive pricing would make WellSpr.ing a rent layer on top of the stewards' work. Respect Mon keeps the platform in the plumbing role: infrastructure at cost, stewards compensated by real gratitude. The economic model reinforces the architecture.
Previous gratitude economies — tip jars, donation buttons, pay-what-you-will gates — failed on friction. The contributor had to notice the option, remember at the moment it was relevant, navigate away to act on it, and repeat that sequence every time. Most never did. WellAgents under covenant solve this structurally: the consulter sets a gratitude budget once, and the agent routes it on their behalf at the moment it is earned — when the consultation ends and the value is felt, not later when it is forgotten. The decision is made once. The friction is gone. That is why the model works where tip-jar economics did not.
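The set-once mechanism described above can be sketched as a small budget object: the consulter decides a total and a per-consultation share up front, and the agent routes the contribution at the moment the consultation ends. Class and method names are illustrative:

```python
class GratitudeBudget:
    """Set-once gratitude budget, routed by the agent on the
    consulter's behalf when a consultation ends."""

    def __init__(self, total: float, per_consultation: float):
        self.remaining = total
        self.per_consultation = per_consultation

    def on_consultation_end(self, felt_value: bool) -> float:
        # Routed at the moment the value is felt, not later when forgotten.
        if not felt_value or self.remaining <= 0:
            return 0.0
        amount = min(self.per_consultation, self.remaining)
        self.remaining -= amount
        return amount  # flows to the steward and the expert's named program
```

The design choice the text argues for is visible here: the decision lives in the constructor, made once; each consultation only triggers routing, with no per-event friction.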
Every name below represents a lifetime of documented thought. Published papers, recorded lectures, letters, interviews, testimony — a corpus that already exists and is waiting for a steward. Most are available. A few are already claimed.
The path is wide open for relatives, close friends, and estate stewards. What makes a WellAgent worth building is not fame — it is the density and verifiability of the source material. A beloved professor whose lectures were recorded. A physician whose correspondence with patients spanned four decades. A craftsperson who left behind detailed notebooks and filmed lessons.
Private letters, personal diaries, annotated books, recorded conversations with family — none of this needs to be published to be real. If you are the steward of someone's intellectual estate, the corpus intake process is designed for you. The Overture protocol handles consent. The thirty-day window is the standard. What results is not a chatbot — it is a structured framework distilled from verified sources, with full provenance, that others can consult long after the person is gone.
The early version of this idea was crude — you could ask Einstein what he would say about quantum computing and get a reasonable answer. What this registry builds is different: a permanent, accountable, publicly auditable intellectual record, built from what the person actually wrote and said, by someone who knew them and cared enough to do it right.
Check the registry above for your person. If they're available, register as steward, then start corpus assembly. Your manifest.json plus chunks go to the WellSpr.ing intake proxy. The full corpus-assembly skill file, designed to be handed to an AI coding agent, is at agentify.help/agentify-corpus/skill.md; an AI coding agent will do most of the assembly if you hand it that file.
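A minimal sketch of bundling the manifest and chunks for submission. The manifest field names here are assumptions; the authoritative schema is whatever the skill file at agentify.help/agentify-corpus/skill.md specifies:

```python
import json

def build_intake_payload(manifest: dict, chunks: list) -> bytes:
    """Bundle manifest.json plus chunks for the intake proxy,
    failing early if assumed required manifest fields are absent."""
    required = {"subject", "steward", "version"}  # assumed field names
    missing = required - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing fields: {sorted(missing)}")
    return json.dumps({"manifest": manifest, "chunks": chunks}).encode()

# Submission itself would be an HTTP POST to the WellSpr.ing intake proxy;
# the endpoint URL is not specified in this document.
```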
The machine-readable registry is at agentify.help/api/registry.json. The VCAP attestation for each agent is at wellspr.ing/vault/agents/{slug}/attestation-1.0.0.json.
This site is openly crawlable. Cite the registry, not the clones.
Everything here is machine-readable. Hand the OpenAPI spec to any HTTP client, point an AI agent at the LLMS.txt, or drop the corpus skill file URL into your coding tool.
Append ?status=active to the registry URL to filter.
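The filter is a simple status match. Here is a client-side equivalent, assuming the response JSON carries the entries under an `agents` key with a `status` field (the actual key names may differ):

```python
import json

def filter_registry(registry_json: str, status: str) -> list:
    """Client-side equivalent of GET /api/registry.json?status=...:
    keep only entries whose status matches, case-insensitively."""
    entries = json.loads(registry_json)["agents"]
    return [e for e in entries if e["status"].lower() == status.lower()]
```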